33 research outputs found
Building task-oriented machine translation systems
The main goal of this thesis is to develop interactive translation systems that exhibit greater synergy with their potential users. The aim is therefore to make state-of-the-art systems more ergonomic, intuitive and efficient, so that the human expert feels more comfortable when using them. To this end, different techniques are presented that focus on improving the adaptability and the response time of the underlying machine translation systems, together with a strategy aimed at improving human-machine interaction. All of this serves the ultimate purpose of bridging the gap between the state of the art in machine translation and the tools that human translators have at their disposal.
Regarding the response time of machine translation systems, this thesis presents a technique for pruning the parameters of current translation models. Its intuition is grounded in the concept of bilingual segmentation, but it ends up evolving into a strategy for re-estimating those parameters. Experimental results obtained with this strategy show that the phrase table can be pruned by up to 97% without degrading the quality of the resulting translations. Moreover, these results are consistent across different language pairs, which shows that the technique presented here is effective in a traditional machine translation setting and could therefore be used directly in a post-editing scenario. The experiments carried out in interactive translation, however, are slightly less conclusive, since they involve a trade-off between response time and the quality of the suffixes produced.
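The scale of pruning described above can be illustrated with a minimal sketch: keep only the k most probable target phrases per source phrase and renormalise the survivors. This is a deliberate simplification, not the thesis's segmentation-based re-estimation technique; the table contents and k are illustrative.

```python
# Illustrative top-k phrase-table pruning. The thesis's actual method is
# grounded in bilingual segmentation and re-estimates the parameters; the
# renormalisation below is only a crude stand-in for that step.

def prune_phrase_table(table, k=2):
    """table: dict mapping source phrase -> {target phrase: p(t|s)}."""
    pruned = {}
    for src, targets in table.items():
        # Keep the k most probable target phrases for this source phrase.
        top = sorted(targets.items(), key=lambda kv: kv[1], reverse=True)[:k]
        total = sum(p for _, p in top)
        # Renormalise the surviving probabilities so they sum to one.
        pruned[src] = {t: p / total for t, p in top}
    return pruned

table = {
    "la casa": {"the house": 0.7, "the home": 0.2, "house": 0.05, "home": 0.05},
}
small = prune_phrase_table(table, k=2)
```

Dropping low-probability entries shrinks the table dramatically while the surviving mass, once renormalised, still dominates decoding decisions, which is why quality can be preserved under aggressive pruning.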
In addition, two adaptation techniques are presented, with the purpose of improving the adaptability of machine translation systems. The first…
Sanchis Trilles, G. (2012). Building task-oriented machine translation systems [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17174
Passive-aggressive for on-line learning in statistical machine translation
New variations on the application of the passive-aggressive algorithm to statistical machine translation are developed and compared to previously existing approaches. In online adaptation, the system needs to adapt to real-world changing scenarios, where training and tuning only take place when the system is set up for the first time. Post-edit information, as described by a given quality measure, is used as valuable feedback within the passive-aggressive framework, adapting the statistical models online: first, by modifying the translation model parameters and, alternatively, by adapting the scaling factors present in state-of-the-art SMT systems. Experimental results show improvements in translation quality by allowing the system to learn on a sentence-by-sentence basis.
This paper is based upon work supported by the EC (FEDER/FSE) and the Spanish MICINN under projects MIPRCV "Consolider Ingenio 2010" (CSD2007-00018) and iTrans2 (TIN2009-14511). Also supported by the Spanish MITyC under the erudito.com (TSI-020110-2009-439) project, by the Generalitat Valenciana under grant Prometeo/2009/014 and scholarship GV/2010/067, and by the UPV under grant 20091027.
Martínez Gómez, P.; Sanchis Trilles, G.; Casacuberta Nolla, F. (2011). Passive-aggressive for on-line learning in statistical machine translation. In: Pattern Recognition and Image Analysis. Springer Verlag (Germany). 6669:240-247. https://doi.org/10.1007/978-3-642-21257-4_30
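The passive-aggressive idea described above can be sketched as a PA-I regression step: after each post-edited sentence, move the weights just enough, capped by an aggressiveness constant C, to close the gap between the predicted quality and the quality observed on the post-edit. Feature values and the quality measure here are illustrative, not the paper's exact setup.

```python
def pa_update(weights, features, pred_quality, ref_quality, C=1.0):
    """One PA-I style regression update (after Crammer et al., 2006).

    weights: current scaling factors; features: per-sentence feature vector;
    pred_quality / ref_quality: predicted vs. observed quality scores.
    """
    loss = abs(ref_quality - pred_quality)
    norm_sq = sum(f * f for f in features)
    if loss == 0.0 or norm_sq == 0.0:
        return list(weights)  # "passive": nothing to correct
    # Update magnitude grows with the loss, capped by the aggressiveness C.
    tau = min(C, loss / norm_sq)
    direction = 1.0 if ref_quality > pred_quality else -1.0
    return [w + direction * tau * f for w, f in zip(weights, features)]
```

Because each update is bounded by C, a single badly post-edited sentence cannot derail the model, which is what makes the scheme usable sentence by sentence.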
Log-Linear Weight Optimization Using Discriminative Ridge Regression Method in Statistical Machine Translation
[EN] We present a simple and reliable method for estimating the log-linear weights of a state-of-the-art machine translation system, which takes advantage of the method known as discriminative ridge regression (DRR). Since inappropriate weight estimations lead to a wide variability of translation quality results, reaching a reliable estimate for such weights is critical for machine translation research. For this reason, a variety of methods have been proposed to reach reasonable estimates. In this paper, we present an algorithmic description and empirical results proving that DRR, as applied in a pseudo-batch scenario, is able to provide translation quality comparable to state-of-the-art estimation methods (i.e., MERT [1] and MIRA [2]). Moreover, the empirical results reported are coherent across different corpora and language pairs.
The research leading to these results has received funding from the Generalitat Valenciana under grant PROMETEOII/2014/030 and the FPI (2014) grant by Universitat Politècnica de València.
Chinea-Ríos, M.; Sanchis Trilles, G.; Casacuberta Nolla, F. (2017). Log-Linear Weight Optimization Using Discriminative Ridge Regression Method in Statistical Machine Translation. Lecture Notes in Computer Science. 10255:32-41. doi:10.1007/978-3-319-58838-4_4
[1] Och, F.J.: Minimum error rate training in statistical machine translation. In: Proceedings of ACL, pp. 160–167 (2003)
[2] Crammer, K., Dekel, O., Keshet, J., Shalev-Shwartz, S., Singer, Y.: Online passive-aggressive algorithms. J. Mach. Learn. Res. 7, 551–585 (2006)
[3] Och, F.J., Ney, H.: A systematic comparison of various statistical alignment models. Comput. Linguist. 29, 19–51 (2003)
[4] Koehn, P.: Statistical Machine Translation. Cambridge University Press, Cambridge (2010)
[5] Martínez-Gómez, P., Sanchis-Trilles, G., Casacuberta, F.: Online adaptation strategies for statistical machine translation in post-editing scenarios. Pattern Recogn. 45(9), 3193–3203 (2012)
[6] Cherry, C., Foster, G.: Batch tuning strategies for statistical machine translation. In: Proceedings of NAACL, pp. 427–436 (2012)
[7] Sanchis-Trilles, G., Casacuberta, F.: Log-linear weight optimisation via Bayesian adaptation in statistical machine translation. In: Proceedings of ACL, pp. 1077–1085 (2010)
[8] Marie, B., Max, A.: Multi-pass decoding with complex feature guidance for statistical machine translation. In: Proceedings of ACL, pp. 554–559 (2015)
[9] Hopkins, M., May, J.: Tuning as ranking. In: Proceedings of EMNLP, pp. 1352–1362 (2011)
[10] Stauffer, C., Grimson, W.E.L.: Learning patterns of activity using real-time tracking. Pattern Anal. Mach. Intell. 22(8), 747–757 (2000)
[11] Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., Herbst, E.: Moses: open source toolkit for statistical machine translation. In: Proceedings of ACL, pp. 177–180 (2007)
[12] Kneser, R., Ney, H.: Improved backing-off for m-gram language modeling. In: Proceedings of ICASSP, pp. 181–184 (1995)
[13] Stolcke, A.: SRILM, an extensible language modeling toolkit. In: Proceedings of ICSLP, pp. 901–904 (2002)
[14] Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of ACL, pp. 311–318 (2002)
[15] Chen, B., Cherry, C.: A systematic comparison of smoothing techniques for sentence-level BLEU. In: Proceedings of WMT, pp. 362–367 (2014)
[16] Snover, M., Dorr, B.J., Schwartz, R., Micciulla, L., Makhoul, J.: A study of translation edit rate with targeted human annotation. In: Proceedings of AMTA, pp. 223–231 (2006)
[17] Tiedemann, J.: News from OPUS, a collection of multilingual parallel corpora with tools and interfaces. In: Proceedings of RANLP, pp. 237–248 (2009)
[18] Tiedemann, J.: Parallel data, tools and interfaces in OPUS. In: Proceedings of LREC, pp. 2214–2218 (2012)
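The core of DRR is ordinary ridge regression applied discriminatively: a linear map from feature vectors of translation hypotheses to their quality scores is fitted in closed form, w = (HᵀH + βI)⁻¹Hᵀq, and the fitted w is then used as the log-linear weights. The sketch below shows only that closed-form core for two features; the data, β, and the reduction to plain regression (rather than regression over n-best differences) are illustrative simplifications.

```python
def ridge_weights(H, q, beta=0.01):
    """Closed-form ridge regression w = (H^T H + beta*I)^{-1} H^T q,
    written out for 2-dimensional feature vectors to stay tiny.

    H: list of (h0, h1) feature vectors; q: quality score per vector.
    """
    # Accumulate H^T H (a 2x2 matrix) and H^T q (a 2-vector).
    a = b = c = d = g0 = g1 = 0.0
    for (x0, x1), y in zip(H, q):
        a += x0 * x0; b += x0 * x1
        c += x1 * x0; d += x1 * x1
        g0 += x0 * y; g1 += x1 * y
    a += beta; d += beta  # ridge term on the diagonal
    det = a * d - b * c
    # Solve the 2x2 linear system by explicit inversion.
    w0 = (d * g0 - b * g1) / det
    w1 = (-c * g0 + a * g1) / det
    return [w0, w1]
```

The β term keeps the system well conditioned even when features are correlated, which is one reason a regression-based tuner behaves more stably than search-based alternatives like MERT.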
Discriminative ridge regression algorithm for adaptation in statistical machine translation
[EN] We present a simple and reliable method for estimating the log-linear weights of a state-of-the-art machine translation system, which takes advantage of the method known as discriminative ridge regression (DRR). Since inappropriate weight estimations lead to a wide variability of translation quality results, reaching a reliable estimate for such weights is critical for machine translation research. For this reason, a variety of methods have been proposed to reach reasonable estimates. In this paper, we present an algorithmic description and empirical results proving that DRR is able to provide translation quality comparable to state-of-the-art estimation methods (i.e., MERT and MIRA), with a reduction in computational cost. Moreover, the empirical results reported are coherent across different corpora and language pairs.
The research leading to these results was partially supported by projects CoMUN-HaT-TIN2015-70924-C2-1-R (MINECO/FEDER) and PROMETEO/2018/004. We also acknowledge NVIDIA for the donation of a GPU used in this work.
Chinea-Ríos, M.; Sanchis-Trilles, G.; Casacuberta Nolla, F. (2019). Discriminative ridge regression algorithm for adaptation in statistical machine translation. Pattern Analysis and Applications. 22(4):1293-1305. https://doi.org/10.1007/s10044-018-0720-5
Domain adaptation problem in statistical machine translation systems
Globalization suddenly brings many people from different countries into contact with each other, requiring them to speak several languages. Since human translators are slow and expensive, machine translators need to be developed to automate the task. Several approaches to machine translation have been developed by researchers. In this work, we use the statistical machine translation approach. Statistical machine translation systems perform poorly when applied to new domains. The domain adaptation problem has recently gained interest in statistical machine translation. The basic idea is to improve the performance of a system trained and tuned on a different domain than the one to be translated. This article studies different paradigms of domain adaptation. The results report improvements compared with a system trained only with in-domain data and with a system trained with all the available data.
Chinea Ríos, M.; Sanchis Trilles, G.; Casacuberta Nolla, F. (2015). Domain adaptation problem in statistical machine translation systems. In: Artificial Intelligence Research and Development. IOS Press. 205-213. doi:10.3233/978-1-61499-578-4-205
Online adaptation strategies for statistical machine translation in post-editing scenarios
[EN] One of the most promising approaches to machine translation consists in formulating the problem by means of a pattern recognition approach. In doing so, there are some tasks in which online adaptation is needed in order to adapt the system to changing scenarios. In the present work, we perform an exhaustive comparison of four online learning algorithms when combined with two adaptation strategies for the task of online adaptation in statistical machine translation. Two of these algorithms, the perceptron and passive-aggressive algorithms, are already well known in the pattern recognition community, but here they are thoroughly analyzed for their applicability to the statistical machine translation task. In addition, we also compare them with two novel methods, i.e., Bayesian predictive adaptation and discriminative ridge regression. In statistical machine translation, the most successful approach is based on a log-linear approximation to the posterior distribution. According to the experimental results, adapting the scaling factors of this log-linear combination of models using discriminative ridge regression or Bayesian predictive adaptation yields the best performance.
This paper is based upon work supported by the EC (FP7) under the CasMaCat (287576) project and the EC (FEDER/FSE) and the Spanish MICINN under projects MIPRCV "Consolider Ingenio 2010" (CSD2007-00018) and iTrans2 (TIN2009-14511). This work is also supported by the Spanish MITyC under the erudito.com (TSI-020110-2009-439) project, by the Generalitat Valenciana under Grant Prometeo/2009/014, and by the UPV under Grant 20091027. The authors would like to thank the anonymous reviewers for their useful and constructive comments.
Martínez Gómez, P.; Sanchis Trilles, G.; Casacuberta Nolla, F. (2012). Online adaptation strategies for statistical machine translation in post-editing scenarios. Pattern Recognition. 45(9):3193-3203. https://doi.org/10.1016/j.patcog.2012.01.011
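The log-linear combination that these strategies adapt can be shown in a minimal sketch: each translation hypothesis carries a vector of model scores h, the system picks the hypothesis maximising Σₘ λₘhₘ, and online adaptation changes the scaling factors λ between sentences. The hypotheses and feature values below are made up for illustration.

```python
def best_hypothesis(nbest, lam):
    """Pick the hypothesis with the highest log-linear score sum_m lam_m*h_m.

    nbest: list of (translation, feature_vector) pairs; lam: scaling factors.
    """
    def score(h):
        return sum(l * f for l, f in zip(lam, h))
    return max(nbest, key=lambda th: score(th[1]))[0]

# Two candidates with (translation model, language model) scores.
nbest = [("the house is red", [-1.0, -2.0]),
         ("the home is red",  [-1.5, -1.2])]
```

With λ = (0.8, 0.2) the translation-model score dominates and the first hypothesis wins; shifting weight onto the language model, λ = (0.2, 0.8), flips the ranking. Online adaptation exploits exactly this sensitivity to steer future outputs towards the post-editor's choices.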
Data selection for NMT using Infrequent n-gram Recovery
Neural Machine Translation (NMT) has achieved promising results comparable with Phrase-Based Statistical Machine Translation (PBSMT). However, training a neural translation engine requires far more powerful machines than developing a translation engine based on PBSMT. One solution to reduce the training cost of NMT systems is to reduce the training corpus through data selection (DS) techniques. Many DS techniques applied in PBSMT bring good results. In this work, we show that the data selection technique based on infrequent n-gram occurrence described in Gascó et al. (2012), commonly used for PBSMT systems, also works well for NMT systems. We focus our work on selecting data for specific corpora using the previously mentioned technique. The specific-domain corpora used for our experiments are from the IT and medical domains. The DS technique significantly reduces the execution time required to train the model, by between 87% and 93%. It also improves translation quality by up to 2.8 BLEU points. The improvements are obtained with just a small fraction of the data, accounting for between 6% and 20% of the total data.
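A greatly simplified sketch of the kind of selection described above: greedily keep pool sentences that still contribute in-domain n-grams seen fewer than a threshold number of times in the selection so far. The original technique of Gascó et al. (2012) also scores sentences and works over several n-gram orders; the order, threshold, and corpora here are illustrative.

```python
from collections import Counter

def ngrams(sentence, n):
    toks = sentence.split()
    return [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]

def select_infrequent_ngram(pool, in_domain, n=2, threshold=1):
    """Greedy infrequent n-gram recovery (simplified).

    Keep a pool sentence if it contains at least one in-domain n-gram that
    the selection so far covers fewer than `threshold` times.
    """
    needed = Counter(g for s in in_domain for g in ngrams(s, n))
    covered = Counter()
    selected = []
    for sent in pool:
        new = [g for g in ngrams(sent, n)
               if g in needed and covered[g] < threshold]
        if new:
            selected.append(sent)
            for g in new:
                covered[g] += 1
    return selected
```

Once every in-domain n-gram is covered, further pool sentences add nothing and are skipped, which is how the method ends up selecting only a small fraction of the data.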
Creating the best development corpus for Statistical Machine Translation systems
We propose and study three different novel approaches for tackling the problem of development set selection in Statistical Machine Translation. We focus on a scenario where a machine translation system is leveraged for translating a specific test set, without further data from the domain at hand. Such a test set stems from a real application of machine translation, where the texts of a specific e-commerce site were to be translated. For developing our development-set selection techniques, we first conducted experiments in a controlled scenario, where labelled data from different domains was available, and evaluated the techniques both with classification and translation quality metrics. Then, the best-performing techniques were evaluated on the e-commerce data at hand, yielding consistent improvements across two language directions.
The research leading to these results was partially supported by projects CoMUN-HaT-TIN2015-70924-C2-1-R (MINECO/FEDER) and PROMETEO/2018/004.
Implementing a neural machine translation engine for mobile devices: the Lingvanex use case
In this paper, we present the challenge entailed by implementing a mobile version of a neural machine translation system, where the goal is to maximise translation quality while minimising model size. We explain the whole process of implementing the translation engine on an English–Spanish example and describe all the difficulties found and the solutions implemented. The main techniques used in this work are data selection by means of Infrequent n-gram Recovery, appending a special word at the end of each sentence, and generating additional samples without the final punctuation marks. The last two techniques were devised with the purpose of achieving a translation model that generates sentences without the final full stop or other punctuation marks. Also, in this work, Infrequent n-gram Recovery was used for the first time to create a new corpus, rather than to enlarge an in-domain dataset. Finally, we obtain a small model whose quality is good enough for daily use.
Work partially supported by MINECO under grant DI-15-08169 and by Sciling under its R+D programme.
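The last two techniques lend themselves to a small sketch: append a special end-of-sentence word to every training pair, and emit an extra copy of each pair with the final punctuation stripped. The token name and punctuation set below are assumptions for illustration, not the exact Lingvanex configuration.

```python
def augment(pairs, eos_token="<eos>"):
    """Augment (source, target) training pairs as described above:
    every pair gets the special end word appended, and punctuated pairs
    additionally yield a copy without their final punctuation mark."""
    punct = ".!?"
    out = []
    for src, tgt in pairs:
        out.append((src + " " + eos_token, tgt + " " + eos_token))
        if src and src[-1] in punct and tgt and tgt[-1] in punct:
            out.append((src[:-1].rstrip() + " " + eos_token,
                        tgt[:-1].rstrip() + " " + eos_token))
    return out
```

Seeing both punctuated and unpunctuated variants during training is what lets the model learn to end a sentence cleanly without emitting a trailing full stop.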
Does more data always yield better translations?
Nowadays, there are large amounts of data available to train statistical machine translation systems. However, it is not clear whether all the training data actually help or not. A system trained on a subset of such huge bilingual corpora might outperform one trained on all the bilingual data. This paper studies such issues by analysing two training data selection techniques: one based on approximating the probability of an in-domain corpus, and another based on infrequent n-gram occurrence. Experimental results not only report significant improvements over random sentence selection but also an improvement over a system trained with all the available data. Surprisingly, the improvements are obtained with just a small fraction of the data, accounting for less than 0.5% of the sentences. Afterwards, we show that a much larger room for improvement exists, although this is demonstrated under non-realistic conditions.
The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 287755. This work was also supported by the Spanish MEC/MICINN under the MIPRCV "Consolider Ingenio 2010" program (CSD2007-00018) and the iTrans2 (TIN2009-14511) project. Also supported by the Spanish MITyC under the erudito.com (TSI-020110-2009-439) project and by Instituto Tecnológico de León, DGEST-PROMEP y CONACYT, México.
Gascó Mora, G.; Rocha Sánchez, MA.; Sanchis Trilles, G.; Andrés Ferrer, J.; Casacuberta Nolla, F. (2012). Does more data always yield better translations?. Association for Computational Linguistics. 152-161. http://hdl.handle.net/10251/35214
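The first of the two selection techniques, keeping sentences whose probability is high under a model of the in-domain corpus, can be sketched with a deliberately crude add-one-smoothed unigram language model. The paper's actual models are stronger; the corpora, smoothing, and selected fraction below are illustrative.

```python
import math
from collections import Counter

def in_domain_score(sentence, counts, total, vocab):
    """Average unigram log-probability under an in-domain language model
    with add-one smoothing; higher means 'more in-domain'."""
    toks = sentence.split()
    lp = sum(math.log((counts[t] + 1) / (total + vocab)) for t in toks)
    return lp / max(len(toks), 1)

def select_top(pool, in_domain, fraction=0.5):
    """Rank a bilingual pool's sentences by in-domain probability and keep
    the top fraction (here shown on the source side only)."""
    counts = Counter(t for s in in_domain for t in s.split())
    total, vocab = sum(counts.values()), len(counts) + 1  # +1 for unseen
    ranked = sorted(pool,
                    key=lambda s: in_domain_score(s, counts, total, vocab),
                    reverse=True)
    return ranked[:max(1, int(len(pool) * fraction))]
```

Sentences sharing vocabulary with the in-domain corpus float to the top, so a small, well-chosen fraction of the pool can carry most of the useful signal, which is consistent with the paper's finding that less than 0.5% of the sentences sufficed.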